In mathematics, a series is, roughly speaking, an addition of infinitely many terms, one after the other. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures in combinatorics through generating functions. The mathematical properties of infinite series make them widely applicable in other quantitative disciplines such as physics, computer science, statistics and finance.
Among the ancient Greeks, the idea that a potentially infinite summation could produce a finite result was considered paradoxical, most famously in Zeno's paradoxes. Nonetheless, infinite series were applied practically by ancient Greek mathematicians including Archimedes, for instance in the quadrature of the parabola. The mathematical side of Zeno's paradoxes was resolved using the concept of a limit during the 17th century, especially through the early calculus of Isaac Newton. The resolution was made more rigorous and further improved in the 19th century through the work of Carl Friedrich Gauss and Augustin-Louis Cauchy, among others, who answered questions about which of these sums exist via the completeness of the real numbers and whether series terms can be rearranged without changing their sums, using the notions of absolute and conditional convergence.
In modern terminology, any ordered infinite sequence $(a_1, a_2, a_3, \ldots)$ of terms, whether those terms are numbers, functions, matrices, or anything else that can be added, defines a series, which is the addition of the $a_i$ one after the other. To emphasize that there are an infinite number of terms, series are often also called infinite series to contrast with finite series, a term sometimes used for summation. Series are represented by an expression like $a_1 + a_2 + a_3 + \cdots$ or, using capital-sigma summation notation, $\sum_{i=1}^{\infty} a_i.$
The infinite sequence of additions expressed by a series cannot be explicitly performed in sequence in a finite amount of time. However, if the terms and their finite sums belong to a set that has limits, it may be possible to assign a value to a series, called the sum of the series. This value is the limit as $n$ tends to infinity of the finite sums of the first $n$ terms of the series, if the limit exists.
The expression $\sum_{i=1}^{\infty} a_i$ denotes both the series—the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the explicit limit of the process. This is a generalization of the similar convention of denoting by $a + b$ both the addition—the process of adding—and its result—the sum of $a$ and $b$.
Commonly, the terms of a series come from a ring, often the field of the real numbers or the field of the complex numbers. If so, the set of all series is also itself a ring, one in which the addition consists of adding series terms together term by term and the multiplication is the Cauchy product.
It is also common to express series using a few first terms, an ellipsis, a general term, and then a final ellipsis, the general term being an expression of the $n$th term as a function of $n$: $a_1 + a_2 + \cdots + a_n + \cdots.$ For example, Euler's number can be defined with the series $\sum_{n=0}^{\infty} \frac{1}{n!} = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \cdots + \frac{1}{n!} + \cdots,$ where $n!$ denotes the product of the first $n$ positive integers, and $0!$ is conventionally equal to $1$.
Some authors directly identify a series with its sequence of partial sums. Either the sequence of partial sums or the sequence of terms completely characterizes the series, and the sequence of terms can be recovered from the sequence of partial sums by taking the differences between consecutive elements, $a_n = s_n - s_{n-1}.$
Partial summation of a sequence is an example of a linear sequence transformation, and it is also known as the prefix sum in computer science. The inverse transformation for recovering a sequence from its partial sums is the finite difference, another linear sequence transformation.
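As a concrete illustration (a sketch, not part of the article), the partial-summation transformation and its inverse can be written in a few lines of Python; the function names here are hypothetical:

```python
# Prefix sums (partial summation) and their inverse, the finite difference,
# as linear sequence transformations.
from itertools import accumulate

def partial_sums(terms):
    """s_k = a_0 + a_1 + ... + a_k for each k."""
    return list(accumulate(terms))

def finite_differences(sums):
    """Recover the terms from the partial sums by consecutive differences."""
    return [sums[0]] + [sums[k] - sums[k - 1] for k in range(1, len(sums))]

terms = [3, 1, 4, 1, 5]
sums = partial_sums(terms)                 # [3, 4, 8, 9, 14]
assert finite_differences(sums) == terms   # the two transformations are inverse
```

Both operations are linear: applying them to a term-by-term sum of two sequences gives the sum of the transformed sequences.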
Partial sums of series sometimes have simpler closed form expressions, for instance an arithmetic series $a + (a + d) + (a + 2d) + \cdots$ has partial sums $s_n = \sum_{k=0}^{n} (a + kd) = (n+1)\left(a + \tfrac{nd}{2}\right),$ and a geometric series $a + ar + ar^2 + \cdots$ has partial sums $s_n = \sum_{k=0}^{n} ar^k = a\,\frac{1 - r^{n+1}}{1 - r}$ if $r \neq 1$ or simply $s_n = a(n+1)$ if $r = 1$.
An example of a convergent series is the geometric series $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \cdots.$
It can be shown by algebraic computation that each partial sum is $s_n = \sum_{k=0}^{n} \frac{1}{2^k} = 2 - \frac{1}{2^n}.$ As one has $\lim_{n\to\infty} \left(2 - \frac{1}{2^n}\right) = 2,$ the series is convergent and converges to $2$ with truncation errors $2 - s_n = \frac{1}{2^n}$.
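A short numerical sketch (not from the article) confirms the closed form for the partial sums of the geometric series $1 + 1/2 + 1/4 + \cdots$ and its truncation error:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... match the closed form 2 - 2**-n,
# so the truncation error after the first n+1 terms is exactly 2**-n.
def partial_sum(n):
    """Sum of 1/2**k for k = 0..n."""
    return sum(2.0 ** -k for k in range(n + 1))

for n in (1, 5, 10, 20):
    s = partial_sum(n)
    assert abs(s - (2 - 2.0 ** -n)) < 1e-15   # closed form for the partial sum
    assert abs((2 - s) - 2.0 ** -n) < 1e-15   # truncation error halves each step
```

Powers of two are exact in binary floating point, so the comparison is essentially exact here.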
By contrast, the geometric series $1 + 2 + 4 + 8 + \cdots + 2^n + \cdots$ is divergent in the real numbers. However, it is convergent in the extended real number line, with $+\infty$ as its limit and $+\infty$ as its truncation error at every step.
When a series's sequence of partial sums is not easily calculated and evaluated for convergence directly, convergence tests can be used to prove that the series converges or diverges.
For example, Grandi's series $1 - 1 + 1 - 1 + \cdots$ has a sequence of partial sums that alternates back and forth between $1$ and $0$ and does not converge. Grouping its elements in pairs creates the series $(1 - 1) + (1 - 1) + \cdots = 0 + 0 + \cdots,$ which has partial sums equal to zero at every term and thus sums to zero. Grouping its elements in pairs starting after the first creates the series $1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots,$ which has partial sums equal to one for every term and thus sums to one, a different result.
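The effect of the two groupings on Grandi's series can be checked directly; this is an illustrative sketch, not part of the article:

```python
# Grandi's series 1 - 1 + 1 - 1 + ...: its raw partial sums oscillate
# between 1 and 0, while the two pairings give constant partial sums.
from itertools import accumulate

grandi = [(-1) ** k for k in range(8)]          # 1, -1, 1, -1, ...
assert list(accumulate(grandi)) == [1, 0, 1, 0, 1, 0, 1, 0]

# Grouping in pairs: (1 - 1) + (1 - 1) + ... sums to zero at every term.
paired = [grandi[k] + grandi[k + 1] for k in range(0, 8, 2)]
assert list(accumulate(paired)) == [0, 0, 0, 0]

# Grouping after the first term: 1 + (-1 + 1) + (-1 + 1) + ... sums to one.
shifted = [grandi[0]] + [grandi[k] + grandi[k + 1] for k in range(1, 7, 2)]
assert list(accumulate(shifted)) == [1, 1, 1, 1]
```

The grouped partial sums are subsequences of the original partial sums, which is why grouping cannot change the sum of a convergent series but can "tame" a divergent one.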
In general, grouping the terms of a series creates a new series with a sequence of partial sums that is a subsequence of the partial sums of the original series. This means that if the original series converges, so does the new series after grouping: all infinite subsequences of a convergent sequence also converge to the same limit. However, if the original series diverges, then the grouped series do not necessarily diverge, as in the example of Grandi's series above. On the other hand, divergence of a grouped series does imply that the original series must be divergent, since it proves there is a subsequence of the partial sums of the original series which is not convergent, which would be impossible if the original series were convergent. This reasoning was applied in Oresme's proof of the divergence of the harmonic series, and it is the basis for the general Cauchy condensation test.
However, as for grouping, an infinitary rearrangement of terms of a series can sometimes lead to a change in the limit of the partial sums of the series. Series with sequences of partial sums that converge to a value but whose terms could be rearranged to form a series with partial sums that converge to some other value are called conditionally convergent series. Those that converge to the same value regardless of rearrangement are called unconditionally convergent series.
For series of real numbers and complex numbers, a series $\sum a_n$ is unconditionally convergent if and only if the series summing the absolute values of its terms, $\sum |a_n|,$ is also convergent, a property called absolute convergence. Otherwise, any series of real numbers or complex numbers that converges but does not converge absolutely is conditionally convergent. Any conditionally convergent sum of real numbers can be rearranged to yield any other real number as a limit, or to diverge. These claims are the content of the Riemann series theorem.
A historically important example of conditional convergence is the alternating harmonic series, $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots,$
which has a sum equal to the natural logarithm of 2, while the sum of the absolute values of the terms is the harmonic series, which diverges, so the alternating harmonic series is conditionally convergent. For instance, rearranging the terms of the alternating harmonic series so that each positive term of the original series is followed by two negative terms of the original series rather than just one yields $1 - \tfrac{1}{2} - \tfrac{1}{4} + \tfrac{1}{3} - \tfrac{1}{6} - \tfrac{1}{8} + \tfrac{1}{5} - \tfrac{1}{10} - \tfrac{1}{12} + \cdots,$ which is $\tfrac{1}{2}$ times the original series, so it has a sum of half of the natural logarithm of 2. By the Riemann series theorem, rearrangements of the alternating harmonic series to yield any other real number are also possible.
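The rearranged sum can be checked numerically; the sketch below (not part of the article) groups the rearrangement into blocks of one positive and two negative terms and compares the result against $\tfrac{1}{2}\ln 2$:

```python
# Rearranging the alternating harmonic series so each positive term is
# followed by two negative terms: 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
# The partial sums approach (1/2) * ln(2) rather than ln(2).
import math

def rearranged_partial_sum(blocks):
    # block m (1-indexed): +1/(2m-1) - 1/(4m-2) - 1/(4m)
    s = 0.0
    for m in range(1, blocks + 1):
        s += 1.0 / (2 * m - 1) - 1.0 / (4 * m - 2) - 1.0 / (4 * m)
    return s

approx = rearranged_partial_sum(200_000)
assert abs(approx - 0.5 * math.log(2)) < 1e-5
```

Each block collapses to $\frac{1}{4m(2m-1)}$, so the blocked series converges quickly even though the underlying rearranged series converges only conditionally.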
Using the symbols $s_{a,n}$ and $s_{b,n}$ for the partial sums of the added series and $s_{a+b,n}$ for the partial sums of the resulting series, this definition implies the partial sums of the resulting series follow $s_{a+b,n} = s_{a,n} + s_{b,n}.$ Then the sum of the resulting series, i.e., the limit of the sequence of partial sums of the resulting series, satisfies $\lim_{n\to\infty} s_{a+b,n} = \lim_{n\to\infty} s_{a,n} + \lim_{n\to\infty} s_{b,n}$ when the limits exist. Therefore, first, the series resulting from addition is summable if the series added were summable, and, second, the sum of the resulting series is the addition of the sums of the added series. The addition of two divergent series may yield a convergent series: for instance, the addition of a divergent series with a series of its terms times $-1$ will yield a series of all zeros that converges to zero. However, for any two series where one converges and the other diverges, the result of their addition diverges.
For series of real numbers or complex numbers, series addition is associative, commutative, and invertible. Therefore series addition gives the sets of convergent series of real numbers or complex numbers the structure of an abelian group and also gives the sets of all series of real numbers or complex numbers (regardless of convergence properties) the structure of an abelian group.
Using the symbols $s_{a,n}$ for the partial sums of the original series and $s_{ca,n}$ for the partial sums of the series after multiplication by a scalar $c$, this definition implies that $s_{ca,n} = c\,s_{a,n}$ for all $n,$ and therefore also $\lim_{n\to\infty} s_{ca,n} = c \lim_{n\to\infty} s_{a,n}$ when the limits exist. Therefore if a series is summable, any nonzero scalar multiple of the series is also summable and vice versa: if a series is divergent, then any nonzero scalar multiple of it is also divergent.
Scalar multiplication of real numbers and complex numbers is associative, commutative, invertible, and it distributes over series addition.
In summary, series addition and scalar multiplication give the set of convergent series of real numbers and the set of all series of real numbers the structure of a real vector space. Similarly, one gets complex vector spaces for series and convergent series of complex numbers. All these vector spaces are infinite dimensional.
Series multiplication of absolutely convergent series of real numbers and complex numbers is associative, commutative, and distributes over series addition. Together with series addition, series multiplication gives the sets of absolutely convergent series of real numbers or complex numbers the structure of a commutative ring, and together with scalar multiplication as well, the structure of a commutative algebra; these operations also give the sets of all series of real numbers or complex numbers the structure of an associative algebra.
For example, the series $\sum_{n=1}^{\infty} \frac{1}{n^2}$ is convergent and absolutely convergent because $\frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n}$ for all $n \ge 2$ and a telescoping sum argument implies that the partial sums of the series of those non-negative bounding terms are themselves bounded above by 2. The exact value of this series is $\frac{\pi^2}{6}$; see Basel problem.
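The telescoping bound can be verified numerically; the following sketch (not part of the article) checks the term-by-term inequality and the resulting bound of 2 on the partial sums of $\sum 1/n^2$:

```python
# Telescoping bound: for n >= 2, 1/n**2 <= 1/(n-1) - 1/n = 1/(n*(n-1)),
# so every partial sum of the series sum 1/n**2 stays below 2.
def basel_partial_sum(n):
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

for n in (2, 10, 1000):
    # term-by-term comparison with the telescoping terms
    assert all(1.0 / k ** 2 <= 1.0 / (k - 1) - 1.0 / k + 1e-15
               for k in range(2, n + 1))
    # partial sums bounded above by 1 + (1 - 1/n) < 2
    assert basel_partial_sum(n) < 2.0
```

The partial sums in fact approach $\pi^2/6 \approx 1.645$, comfortably below the telescoping bound.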
This type of bounding strategy is the basis for general series comparison tests. First is the general direct comparison test: for any series $\sum a_n$, if $\sum b_n$ is an absolutely convergent series such that $|a_n| \le C |b_n|$ for some positive real number $C$ and for sufficiently large $n$, then $\sum a_n$ converges absolutely as well. If $\sum |b_n|$ diverges, and $|a_n| \ge |b_n|$ for all sufficiently large $n$, then $\sum a_n$ also fails to converge absolutely, although it could still be conditionally convergent, for example, if the $a_n$ alternate in sign. Second is the general limit comparison test: if $\sum b_n$ is an absolutely convergent series such that $\limsup_{n\to\infty} \left| \frac{a_n}{b_n} \right|$ is finite, then $\sum a_n$ converges absolutely as well. If $\liminf_{n\to\infty} \left| \frac{a_n}{b_n} \right| > 0$ and $\sum b_n$ fails to converge absolutely, then $\sum a_n$ also fails to converge absolutely, though it could still be conditionally convergent if the $a_n$ vary in sign.
Using comparisons to geometric series specifically, those two general comparison tests imply two further common and generally useful tests for convergence of series with non-negative terms or for absolute convergence of series with general terms. First is the ratio test: if there exists a constant $C < 1$ such that $\left| \frac{a_{n+1}}{a_n} \right| \le C$ for all sufficiently large $n$, then $\sum a_n$ converges absolutely. When the ratio is less than $1$, but not bounded by a constant less than $1$, convergence is possible but this test does not establish it. Second is the root test: if there exists a constant $C < 1$ such that $|a_n|^{1/n} \le C$ for all sufficiently large $n$, then $\sum a_n$ converges absolutely.
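A finite numerical probe can only suggest, never prove, that the ratio condition holds; with that caveat, the following sketch (hypothetical helper, not part of the article) checks the ratio bound on a range of indices:

```python
# Heuristic sketch of the ratio test: the series sum a_n converges
# absolutely if |a_{n+1}/a_n| <= C for a fixed C < 1 and all large n.
# Probing finitely many indices is only suggestive, not a proof.
def ratio_bounded(term, c=0.9, n_start=50, n_probe=50):
    """Check |a_{n+1}/a_n| <= c on the probed indices (a finite heuristic)."""
    return all(abs(term(n + 1) / term(n)) <= c
               for n in range(n_start, n_start + n_probe))

# a_n = n / 2**n: ratios (n+1)/(2n) tend to 1/2 < 1, so the test succeeds.
assert ratio_bounded(lambda n: n / 2**n)
# a_n = 1/n: ratios n/(n+1) tend to 1, so no C < 1 works; test inconclusive,
# and indeed the harmonic series diverges.
assert not ratio_bounded(lambda n: 1 / n)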
Alternatively, using comparisons to integrals specifically, one derives the integral test: if $f(x)$ is a positive monotone decreasing function defined on the interval $[1, \infty),$ then for a series with terms $a_n = f(n)$ for all $n$, the series $\sum a_n$ converges if and only if the integral $\int_{1}^{\infty} f(x)\,dx$ is finite. Using comparisons to flattened-out versions of a series leads to Cauchy's condensation test: if the sequence of terms $a_n$ is non-negative and non-increasing, then the two series $\sum a_n$ and $\sum 2^k a_{2^k}$ are either both convergent or both divergent.
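The condensation test is easy to see in action; this sketch (not part of the article) computes the condensed terms $2^k a_{2^k}$ for the harmonic series and for $\sum 1/n^2$:

```python
# Cauchy's condensation test: sum a_n and sum 2**k * a_{2**k} converge
# or diverge together (for non-negative, non-increasing terms a_n).
def condensed_terms(a, k_max):
    return [2**k * a(2**k) for k in range(k_max)]

# Harmonic series: condensed terms are all 1, so the condensed series
# diverges, hence the harmonic series diverges.
assert condensed_terms(lambda n: 1.0 / n, 5) == [1.0, 1.0, 1.0, 1.0, 1.0]

# For a_n = 1/n**2 the condensed series is geometric with ratio 1/2,
# hence convergent, so sum 1/n**2 converges too.
assert condensed_terms(lambda n: 1.0 / n**2, 4) == [1.0, 0.5, 0.25, 0.125]
```

This is essentially Oresme's argument for the harmonic series, packaged as a general test.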
One important example of a test for conditional convergence is the alternating series test or Leibniz test: A series of the form $\sum (-1)^n a_n$ with all $a_n \ge 0$ is called alternating. Such a series converges if the non-negative sequence $(a_n)$ is monotone decreasing and converges to $0$. The converse is in general not true. A famous example of an application of this test is the alternating harmonic series, which is convergent per the alternating series test (and its sum is equal to $\ln 2$), though the series formed by taking the absolute value of each term is the ordinary harmonic series, which is divergent.
The alternating series test can be viewed as a special case of the more general Dirichlet's test: if $(a_n)$ is a sequence of decreasing non-negative real numbers that converges to zero, and $(b_n)$ is a sequence of terms with bounded partial sums, then the series $\sum a_n b_n$ converges. Taking $b_n = (-1)^n$ recovers the alternating series test.
Abel's test is another important technique for handling semi-convergent series. If a series has the form $\sum a_n = \sum \lambda_n b_n$ where the partial sums of the series with terms $b_n$, $s_{b,n} = b_0 + \cdots + b_n,$ are bounded, $(\lambda_n)$ has bounded variation, and $\lim_{n\to\infty} \lambda_n s_{b,n}$ exists: if $\sup_n |s_{b,n}| < \infty$ and $\sum |\lambda_{n+1} - \lambda_n|$ converges, then the series $\sum a_n$ is convergent.
Other specialized convergence tests for specific types of series include the Dini test for Fourier series.
A series of functions $\sum f_n$ is pointwise convergent to a limit $f$ on a set $E$ if the series converges for each $x$ in $E$ as a series of real or complex numbers. Equivalently, the partial sums $s_N(x) = \sum_{n=1}^{N} f_n(x)$ converge to $f(x)$ as $N$ goes to infinity for each $x$ in $E$.
A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly in a set $E$ if it converges pointwise to the function $f$ at every point of $E$ and the supremum of these pointwise errors in approximating the limit by the $N$th partial sum, $\sup_{x \in E} \left| f(x) - s_N(x) \right|,$ converges to zero with increasing $N$, independently of $x$.
Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the $f_n$ are integrable on a closed and bounded interval $I$ and converge uniformly, then the series is also integrable on $I$ and can be integrated term by term. Tests for uniform convergence include Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.
More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a set of measure zero. Other modes of convergence depend on a different metric space structure on the function space under consideration. For instance, a series of functions converges in mean to a limit function $f$ on a set $E$ if $\lim_{N\to\infty} \int_E \left| s_N(x) - f(x) \right|^2 \, dx = 0.$
A power series is a series of the form $\sum_{n=0}^{\infty} a_n (x - c)^n.$
The Taylor series at a point $c$ of a function is a power series that, in many cases, converges to the function in a neighborhood of $c$. For example, the series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$
is the Taylor series of $e^x$ at the origin and converges to it for every $x$.
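The convergence is easy to observe numerically; the sketch below (not part of the article) compares partial sums of $\sum x^n / n!$ against the exponential function:

```python
# Partial sums of the Taylor series sum x**n / n! at the origin
# converge to exp(x) for every real x.
import math

def exp_partial_sum(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

for x in (-2.0, 0.5, 3.0):
    assert abs(exp_partial_sum(x, 30) - math.exp(x)) < 1e-9
```

Thirty terms already leave a remainder far below floating-point noise for these inputs, since the factorial in the denominator eventually dominates any power of $x$.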
Unless it converges only at $x = c$, such a series converges on a certain open disc of convergence centered at the point $c$ in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients $a_n$. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence.
Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.
Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, differentiation, and antidifferentiation for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.
If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.
A Dirichlet series is one of the form $\sum_{n=1}^{\infty} \frac{a_n}{n^s},$
where $s$ is a complex number. For example, if all $a_n$ are equal to $1$, then the sum of the Dirichlet series is the Riemann zeta function $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$
Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of $s$ is greater than a number called the abscissa of convergence. In many cases, a function defined by a Dirichlet series is an analytic function that can be extended outside the domain of convergence of the series by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when the real part of $s$ is greater than $1$, but the zeta function can be extended to a holomorphic function defined on $\mathbb{C} \setminus \{1\}$ with a simple pole at $1$.
This series can be directly generalized to general Dirichlet series.
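Within the half-plane of absolute convergence, the Dirichlet series can be evaluated by its partial sums; the sketch below (not part of the article) does this for $s = 2$, where the sum is $\zeta(2) = \pi^2/6$:

```python
# Partial sums of the Dirichlet series sum 1/n**s with a_n = 1 (the
# Riemann zeta function), evaluated at s = 2 where it converges absolutely.
import math

def zeta_partial(s, n_terms):
    return sum(1.0 / n**s for n in range(1, n_terms + 1))

approx = zeta_partial(2, 100_000)
assert abs(approx - math.pi**2 / 6) < 1e-4   # tail is about 1/n_terms
```

The slow, tail-of-order-$1/N$ convergence here contrasts with the factorial-fast convergence of the exponential series above, and convergence fails entirely once the real part of $s$ drops to $1$.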
The most important example of a trigonometric series is the Fourier series of a function.
An asymptotic series cannot necessarily be made to produce an answer as exactly as desired away from the asymptotic limit, the way that an ordinary convergent series of functions can. In fact, a typical asymptotic series reaches its best practical approximation away from the asymptotic limit after a finite number of terms; if more terms are included, the series will produce less accurate approximations.
Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.
Mathematicians from the Kerala school were studying infinite series as early as the 14th century CE.
In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. In the 18th century, Leonhard Euler developed the theory of hypergeometric series and q-series, on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and addressed the questions of remainders and the range of convergence.
Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.
Abel (1826), in his memoir on the binomial series $1 + \frac{m}{1!}x + \frac{m(m-1)}{2!}x^2 + \cdots,$ corrected certain of Cauchy's conclusions and gave a completely scientific summation of the series for complex values of $m$ and $x$. He showed the necessity of considering the subject of continuity in questions of convergence.
Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject; of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration), Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853).
General criteria began with Ernst Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Ulisse Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.
Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function $F(x) = 1^n + 2^n + \cdots + (x-1)^n.$
Angelo Genocchi (1852) has further contributed to the theory.
Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Arthur Cayley (1873) brought it into prominence.
Fourier (1807) set for himself a different problem, to expand a given function of $x$ in terms of the sines or cosines of multiples of $x$, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Rudolf Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Ulisse Dini, Charles Hermite, Halphen, Krause, Byerly and Appell.
If $a : I \to X$ is a function from an index set $I$ to a set $X$ of terms, then the "series" associated to $a$ is the formal sum of the elements $a(i)$ over the index elements $i \in I,$ denoted by $\sum_{i \in I} a(i).$
When the index set is the natural numbers $I = \mathbb{N},$ the function $a : \mathbb{N} \to X$ is a sequence denoted by $a(n) = a_n.$ A series indexed on the natural numbers is an ordered formal sum and so we rewrite $\sum_{n \in \mathbb{N}}$ as $\sum_{n=0}^{\infty}$ in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers, $\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots.$
Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
When the supremum $\sup \left\{ \sum_{i \in A} a_i : A \text{ a finite subset of } I \right\}$ is finite, then the set of $i \in I$ such that $a_i > 0$ is countable. Indeed, for every $n \geq 1,$ the cardinality $|A_n|$ of the set $A_n = \{ i \in I : a_i > 1/n \}$ is finite because $\frac{|A_n|}{n} \leq \sum_{i \in A_n} a_i < \infty.$ Hence the set $\{ i \in I : a_i > 0 \} = \bigcup_{n=1}^{\infty} A_n$ is countable.
If $I$ is countably infinite and enumerated as $I = \{ i_0, i_1, \ldots \},$ then the above defined sum satisfies $\sum_{i \in I} a_i = \sum_{k=0}^{+\infty} a_{i_k},$ provided the value $+\infty$ is allowed for the sum of the series.
Saying that the sum $S = \sum_{i \in I} a_i$ is the limit of finite partial sums means that for every neighborhood $V$ of the origin in $X,$ there exists a finite subset $A_0$ of $I$ such that $S - \sum_{i \in A} a_i \in V$ for every finite set $A \supseteq A_0.$ Because the collection of finite subsets of $I$, partially ordered by inclusion, is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.
For every neighborhood $W$ of the origin in $X,$ there is a smaller neighborhood $V$ such that $V - V \subseteq W.$ It follows that the finite partial sums of an unconditionally summable family $(a_i)_{i \in I}$ form a Cauchy net, that is, for every neighborhood $W$ of the origin in $X,$ there exists a finite subset $A_0$ of $I$ such that $\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W$ for all finite supersets $A_1, A_2 \supseteq A_0,$ which implies that $a_i \in W$ for every $i \in I \setminus A_0$ (by taking $A_1 = A_0 \cup \{i\}$ and $A_2 = A_0$).
When $X$ is complete, a family $(a_i)_{i \in I}$ is unconditionally summable in $X$ if and only if the finite sums satisfy the latter Cauchy net condition. When $X$ is complete and $(a_i)_{i \in I}$ is unconditionally summable in $X,$ then for every subset $J \subseteq I,$ the corresponding subfamily $(a_j)_{j \in J}$ is also unconditionally summable in $X.$
When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group $X = \mathbb{R}.$
If a family $(a_i)_{i \in I}$ in $X$ is unconditionally summable, then for every neighborhood $W$ of the origin in $X,$ there is a finite subset $A_0 \subseteq I$ such that $a_i \in W$ for every index $i$ not in $A_0.$ If $X$ is a first-countable space, then it follows that the set of $i \in I$ such that $a_i \neq 0$ is countable. This need not be true in a general abelian topological group (see examples below).
By nature, the definition of unconditional summability is insensitive to the order of the summation. When $(a_i)_{i \in I}$ is unconditionally summable, then the series remains convergent after any permutation $\sigma$ of the set $I$ of indices, with the same sum, $\sum_{i \in I} a_{\sigma(i)} = \sum_{i \in I} a_i.$
Conversely, if every permutation of a series converges, then the series is unconditionally convergent. When $X$ is complete, then unconditional convergence is also equivalent to the fact that all subseries are convergent; if $X$ is a Banach space, this is equivalent to saying that for every sequence of signs $\varepsilon_n = \pm 1,$ the series $\sum_{n=1}^{\infty} \varepsilon_n a_n$ converges in $X.$
It is called absolutely summable if, in addition, for every continuous seminorm $p$ on $X,$ the family $\left( p(a_i) \right)_{i \in I}$ is summable. If $X$ is a normable space and if $(a_i)_{i \in I}$ is an absolutely summable family in $X,$ then necessarily all but a countable collection of the $a_i$ are zero. Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms.
Summable families play an important role in the theory of nuclear spaces.
More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, $\sum a_n$ converges to $s$ if the sequence of partial sums converges to $s.$
If $(X, |\cdot|)$ is a seminormed space, then the notion of absolute convergence becomes: A series of vectors $\sum a_n$ in $X$ converges absolutely if $\sum_{n=1}^{\infty} |a_n| < +\infty,$ in which case all but at most countably many of the values $|a_n|$ are necessarily zero.
If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (a theorem of Dvoretzky and Rogers).
and for a limit ordinal $\alpha,$ $s_\alpha = \lim_{\beta \to \alpha} s_\beta$ if this limit exists. If all limits exist up to $\alpha,$ then the series converges.